    Active Learning with Statistical Models

    For many types of machine learning algorithms, one can compute the statistically 'optimal' way to select training data. In this paper, we review how optimal data selection techniques have been used with feedforward neural networks. We then show how the same principles may be used to select data for two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression. While the techniques for neural networks are computationally expensive and approximate, the techniques for mixtures of Gaussians and locally weighted regression are both efficient and accurate. Empirically, we observe that the optimality criterion sharply decreases the number of training examples the learner needs in order to achieve good performance.
    Comment: See http://www.jair.org/ for any accompanying file
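
    The criteria the paper derives choose the query point that minimizes the learner's expected predictive variance. As a hedged illustration of the same idea, the sketch below (the bootstrap ensemble, the linear stand-in learner, and all names are illustrative assumptions, not the paper's closed-form method) queries where an ensemble of fits disagrees most:

        import numpy as np

        def pick_query(X_train, y_train, X_pool, n_boot=20, seed=0):
            """Variance-based active learning: return the index of the pool point
            where a bootstrap ensemble of linear fits disagrees most (a generic
            stand-in for the paper's closed-form variance criterion)."""
            rng = np.random.default_rng(seed)
            n = len(X_train)
            preds = np.empty((n_boot, len(X_pool)))
            for b in range(n_boot):
                idx = rng.integers(0, n, size=n)                  # bootstrap resample
                A = np.c_[X_train[idx], np.ones(n)]               # add a bias column
                w, *_ = np.linalg.lstsq(A, y_train[idx], rcond=None)
                preds[b] = np.c_[X_pool, np.ones(len(X_pool))] @ w
            return int(np.argmax(preds.var(axis=0)))              # most uncertain point

    Labeling the selected point and refitting closes one loop of the active-learning cycle; the abstract's empirical claim is that such variance-driven selection reaches good performance with far fewer labels.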

    Mean Field Theory for Sigmoid Belief Networks

    We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition: the classification of handwritten digits.
    Comment: See http://www.jair.org/ for any accompanying file
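
    The lower bound mentioned here is the standard variational (Jensen) bound. With hidden units H, evidence V, and an approximating distribution Q, it reads:

        \log P(V) \;=\; \log \sum_{H} P(H, V)
        \;\geq\; \sum_{H} Q(H) \, \log \frac{P(H, V)}{Q(H)}
        \;=\; \mathbb{E}_{Q}\big[\log P(H, V)\big] + \mathcal{H}(Q)

    where the mean-field choice Q(H) = \prod_i \mu_i^{h_i} (1 - \mu_i)^{1 - h_i} factorizes over hidden units, making the expectation tractable; tightening the bound over the parameters \mu_i gives both the approximation and the likelihood bound the abstract describes.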

    Dynamic Poisson Factorization

    Models for recommender systems use latent factors to explain the preferences and behaviors of users with respect to a set of items (e.g., movies, books, academic papers). Typically, the latent factors are assumed to be static and, given these factors, the observed preferences and behaviors of users are assumed to be generated without order. These assumptions limit the explorative and predictive capabilities of such models, since users' interests and item popularity may evolve over time. To address this, we propose dPF, a dynamic matrix factorization model based on the recent Poisson factorization model for recommendations. dPF models the time-evolving latent factors with a Kalman filter and the actions with Poisson distributions. We derive a scalable variational inference algorithm to infer the latent factors. Finally, we demonstrate dPF on 10 years of user click data from arXiv.org, one of the largest repositories of scientific papers and a formidable source of information about the behavior of scientists. Empirically, we show performance improvements over both static and recently proposed dynamic recommendation models. We also provide a thorough exploration of the inferred posteriors over the latent variables.
    Comment: RecSys 201
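
    As a concrete picture of the model class, this hedged sketch simulates the generative process: latent user and item factors drift as a linear-Gaussian (Kalman-style) state space model, and click counts are Poisson given the factors. The exp link, dimensions, and noise scales are illustrative assumptions rather than the paper's exact parameterization:

        import numpy as np

        rng = np.random.default_rng(0)
        T, U, I, K = 12, 50, 80, 5              # time steps, users, items, factors

        u = rng.normal(0.0, 0.1, (U, K))        # initial user factors
        v = rng.normal(0.0, 0.1, (I, K))        # initial item factors
        clicks = np.empty((T, U, I), dtype=int)

        for t in range(T):
            # Kalman-style linear-Gaussian drift of the latent factors
            u += rng.normal(0.0, 0.05, u.shape)
            v += rng.normal(0.0, 0.05, v.shape)
            rate = np.exp(u) @ np.exp(v).T      # nonnegative Poisson rates (exp link assumed)
            clicks[t] = rng.poisson(rate)       # observed click counts at time t

    The paper works in the reverse direction, deriving a scalable variational algorithm that infers posteriors over these latent trajectories from observed clicks.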

    Nested Hierarchical Dirichlet Processes

    We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP is a generalization of the nested Chinese restaurant process (nCRP) that allows each word to follow its own path to a topic node according to a document-specific distribution on a shared tree. This alleviates the rigid, single-path formulation of the nCRP, allowing a document to more easily express thematic borrowings as a random effect. We derive a stochastic variational inference algorithm for the model, in addition to a greedy subtree selection method for each document, which allows for efficient inference using massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 3.3 million documents from Wikipedia.
    Comment: To appear in IEEE Transactions on Pattern Analysis and Machine Intelligence, Special Issue on Bayesian Nonparametric
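
    The nCRP building block can be sketched in a few lines: a path is drawn from the root, choosing at each node an existing child with probability proportional to its usage count, or a new child with probability proportional to a concentration parameter gamma. The dict-based tree and parameter values below are illustrative choices; the nHDP's departure is that each word, not each document, draws such a path, from a document-specific distribution over a shared tree:

        import random

        def crp_choice(counts, gamma):
            """Chinese restaurant process: pick an existing child with probability
            proportional to its count, or a new child with probability prop. to gamma."""
            total = sum(counts.values()) + gamma
            r = random.uniform(0, total)
            for child, c in counts.items():
                r -= c
                if r <= 0:
                    return child
            return max(counts, default=-1) + 1   # open a new child node

        def ncrp_path(tree, depth, gamma=1.0):
            """Draw one root-to-leaf path; `tree` maps a path prefix to child counts."""
            path = []
            for _ in range(depth):
                counts = tree.setdefault(tuple(path), {})
                child = crp_choice(counts, gamma)
                counts[child] = counts.get(child, 0) + 1
                path.append(child)
            return path

        tree = {}
        paths = [ncrp_path(tree, depth=3) for _ in range(5)]

    Sampling several paths grows a shared tree whose popular branches attract further traffic; that reinforcement is what lets deep, reusable topic hierarchies emerge.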

    Ratio data: Understanding pitfalls and knowing when to standardise

    Ratios represent a single-value metric but consist of two component parts: a numerator variable and a denominator variable. Strictly speaking, a ratio is defined as: “the quantitative relation between two amounts showing the number of times one value contains or is contained by another”. When we discuss symmetry in sport science, we are generally comparing values of some metric between left and right sides or between agonist and antagonist muscles. The typical practice is to express the comparison as a ratio (differences are also a way of standardising, under different assumptions), such as the injured limb having only 60% of the strength of the uninjured limb. Conceptually, though, we are using the ratio as one way to standardise the value of one variable with respect to another. Despite their common use, the interpretation of ratio standardisation, whether for symmetry or other reasons, often presents challenges, some of which are not obvious to practitioners. Typically, when monitoring a change in a ratio, an intervention that affects both the numerator and the denominator will likely make the ratio difficult to interpret appropriately. Therefore, the aim of this editorial is to use some examples to highlight when this form of standardisation may be helpful, and when it can lead to misinterpretation.
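
    A worked example of the monitoring pitfall, with hypothetical numbers: when an intervention adds the same amount to both limbs, the ratio changes while the between-limb difference does not, so the ratio alone cannot say whether the asymmetry improved:

        injured, uninjured = 60.0, 100.0                      # strength, arbitrary units
        ratio_before = injured / uninjured                    # 0.60 (the "60%" example above)
        diff_before = uninjured - injured                     # 40.0

        gain = 20.0                                           # intervention helps both limbs equally
        ratio_after = (injured + gain) / (uninjured + gain)   # ~0.67: the ratio moved
        diff_after = (uninjured + gain) - (injured + gain)    # 40.0: the difference did not

    The ratio moved from 0.60 to about 0.67 even though the 40-unit gap is unchanged, which is exactly the kind of change that is easy to misread as reduced asymmetry.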

    Neural Networks

    We present an overview of current research on artificial neural networks, emphasizing a statistical perspective. We view neural networks as parameterized graphs that make probabilistic assumptions about data, and view learning algorithms as methods for finding parameter values that look probable in the light of the data. We discuss basic issues in representation and learning, and treat some of the practical issues that arise in fitting networks to data. We also discuss links between neural networks and the general formalism of graphical models.
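
    One concrete instance of this statistical view: under a Gaussian noise assumption, y ~ N(f(x; w), sigma^2), maximizing the likelihood of a regression network's weights is, up to constants, minimizing squared error. The sketch below (architecture and data are illustrative assumptions) fits a small one-hidden-layer network by gradient descent on that objective:

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        y = np.sin(X @ np.array([1.0, -2.0, 0.5])) + rng.normal(0.0, 0.1, 200)

        # One hidden layer; under y ~ N(f(x; W), sigma^2) the negative
        # log-likelihood is, up to constants, the mean squared error.
        W1 = rng.normal(0.0, 0.5, (3, 16)); b1 = np.zeros(16)
        w2 = rng.normal(0.0, 0.5, 16);      b2 = 0.0

        lr = 0.05
        for step in range(2000):
            h = np.tanh(X @ W1 + b1)                  # hidden activations
            err = h @ w2 + b2 - y                     # residuals = dNLL/dprediction
            g_h = np.outer(err, w2) * (1.0 - h**2)    # backprop through tanh
            W1 -= lr * (X.T @ g_h) / len(X); b1 -= lr * g_h.mean(axis=0)
            w2 -= lr * (h.T @ err) / len(X); b2 -= lr * err.mean()

    Swapping the Gaussian for a Bernoulli likelihood turns the same derivation into cross-entropy training, in keeping with the probabilistic reading of networks sketched above.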